Object detection is a comprehensively studied problem in autonomous driving. However, it has been relatively less explored in the case of fisheye cameras. The strong radial distortion breaks the translation-invariance inductive bias of convolutional neural networks. Thus, we present the WoodScape fisheye object detection challenge for autonomous driving, held as part of the CVPR 2022 Workshop on Omnidirectional Computer Vision (OmniCV). It is one of the first competitions focused on object detection for fisheye cameras. We encouraged participants to design models that work natively on fisheye images without rectification. We used CodaLab to host the competition based on the publicly available WoodScape fisheye dataset. In this paper, we provide a detailed analysis of the competition, which attracted 120 global teams and 1492 submissions. We briefly outline the details of the winning methods and analyze their qualitative and quantitative results.
translated by Google Translate
Most existing works on pedestrian pose estimation do not consider estimating the pose of an occluded pedestrian, as annotations of the occluded parts are not available in the relevant automotive datasets. For example, CityPersons, a well-known dataset for pedestrian detection in automotive scenes, provides no pose annotations, whereas MS-COCO, a non-automotive dataset, contains human pose annotations. In this work, we propose a multi-task framework that operates on both distributions through detection and instance segmentation tasks. Thereafter, an encoder learns pose-specific features using an unsupervised instance-level adaptation method on the pedestrian instances from both distributions. The proposed framework improves the state-of-the-art performance of pose estimation, pedestrian detection, and instance segmentation.
Keypoint detection and description is a commonly used building block in computer vision systems, particularly for robotics and autonomous driving. However, most techniques to date have focused on standard cameras, with little consideration given to fisheye cameras, which are often used for urban driving and automated parking. In this paper, we propose a novel training and evaluation pipeline for fisheye images. We use SuperPoint as our baseline, a self-supervised keypoint detector and descriptor that has achieved state-of-the-art homography estimation results. We introduce a fisheye adaptation pipeline to enable training on unrectified fisheye images. We evaluate the performance on the HPatches benchmark, and, by introducing a fisheye-based evaluation method for detection repeatability and descriptor matching, on the Oxford RobotCar dataset.
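As a rough illustration of the kind of radial warp such a fisheye adaptation pipeline must handle, the sketch below maps pinhole pixel coordinates through an idealized equidistant fisheye model (the function name, focal length `f`, and principal point `cx, cy` are illustrative assumptions, not the paper's calibration):

```python
import numpy as np

def equidistant_fisheye_warp(pts, f, cx, cy):
    """Map undistorted (pinhole) pixel coordinates to equidistant fisheye
    coordinates via r_fish = f * atan(r_pin / f). A minimal sketch of a
    radial fisheye warp; real lenses require a calibrated polynomial model."""
    p = np.asarray(pts, dtype=float) - [cx, cy]
    r = np.linalg.norm(p, axis=1, keepdims=True)
    # Shrink each radius; points at the principal point are left unchanged.
    scale = np.where(r > 0, f * np.arctan(r / f) / np.maximum(r, 1e-12), 1.0)
    return p * scale + [cx, cy]
```

Under this model, peripheral points are pulled toward the image centre, which is exactly the distortion a detector trained on pinhole images never sees.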
Object detection is a comprehensively studied problem in autonomous driving. However, it has been relatively less explored in the case of fisheye cameras. The standard bounding box fails in fisheye cameras due to the strong radial distortion, particularly in the image's periphery. We explore better representations like oriented bounding box, ellipse, and generic polygon for object detection in fisheye images in this work. We use the IoU metric to compare these representations using accurate instance segmentation ground truth. We design a novel curved bounding box model that has optimal properties for fisheye distortion models. We also design a curvature adaptive perimeter sampling method for obtaining polygon vertices, improving relative mAP score by 4.9% compared to uniform sampling. Overall, the proposed polygon model improves mIoU relative accuracy by 40.3%. It is the first detailed study on object detection on fisheye cameras for autonomous driving scenarios to the best of our knowledge. The dataset comprising 10,000 images, along with ground truth for all the object representations, will be made public to encourage further research. We summarize our work in a short video with qualitative results at https://youtu.be/iLkOzvJpL-A.
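The curvature-adaptive perimeter sampling idea can be sketched as follows: instead of placing polygon vertices at equal arc-length intervals, weight each contour segment by its turning angle so that high-curvature regions receive more vertices. This is a minimal numpy sketch under assumed discretization choices (the `alpha` weighting knob is hypothetical), not the paper's implementation:

```python
import numpy as np

def adaptive_perimeter_sampling(contour, n_vertices, alpha=1.0):
    """Place polygon vertices along a closed contour, allocating more of
    them to high-curvature regions. `alpha` trades off curvature weighting
    against plain arc length (alpha=0 recovers uniform sampling)."""
    pts = np.asarray(contour, dtype=float)
    prev_ = np.roll(pts, 1, axis=0)
    next_ = np.roll(pts, -1, axis=0)
    v1, v2 = pts - prev_, next_ - pts
    # Discrete curvature proxy: absolute turning angle at each point.
    ang1 = np.arctan2(v1[:, 1], v1[:, 0])
    ang2 = np.arctan2(v2[:, 1], v2[:, 0])
    turn = np.abs(np.angle(np.exp(1j * (ang2 - ang1))))
    # Weight each outgoing segment by length plus a curvature bonus.
    seg_len = np.linalg.norm(v2, axis=1)
    weight = seg_len * (1.0 + alpha * turn)
    cum = np.concatenate([[0.0], np.cumsum(weight)])
    # Pick vertices at equal intervals of the *weighted* perimeter.
    targets = np.linspace(0.0, cum[-1], n_vertices, endpoint=False)
    idx = np.searchsorted(cum, targets, side="right") - 1
    return pts[np.clip(idx, 0, len(pts) - 1)]
```

With a fixed vertex budget, corners and tightly curved boundary stretches end up more densely sampled than straight edges.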
Reinforcement Learning (RL) algorithms are known to scale poorly to environments with many available actions, requiring numerous samples to learn an optimal policy. The traditional approach of considering the same fixed action space in every possible state implies that the agent must understand, while also learning to maximize its reward, to ignore irrelevant actions such as $\textit{inapplicable actions}$ (i.e. actions that have no effect on the environment when performed in a given state). Knowing this information can help reduce the sample complexity of RL algorithms by masking the inapplicable actions from the policy distribution to only explore actions relevant to finding an optimal policy. This is typically done in an ad-hoc manner with hand-crafted domain logic added to the RL algorithm. In this paper, we propose a more systematic approach to introduce this knowledge into the algorithm. We (i) standardize the way knowledge can be manually specified to the agent; and (ii) present a new framework to autonomously learn these state-dependent action constraints jointly with the policy. We show experimentally that learning inapplicable actions greatly improves the sample efficiency of the algorithm by providing a reliable signal to mask out irrelevant actions. Moreover, we demonstrate that thanks to the transferability of the knowledge acquired, it can be reused in other tasks to make the learning process more efficient.
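Masking inapplicable actions from the policy distribution is typically implemented by setting the logits of masked actions to negative infinity before the softmax, so they receive exactly zero probability. A minimal sketch, independent of any particular RL library:

```python
import numpy as np

def masked_policy(logits, applicable):
    """Softmax policy with inapplicable actions masked out.
    `applicable` is a boolean vector; masked actions get probability 0."""
    masked = np.where(applicable, logits, -np.inf)
    z = masked - masked.max()   # subtract max for numerical stability
    p = np.exp(z)               # exp(-inf) = 0, so masked entries vanish
    return p / p.sum()
```

The renormalization concentrates exploration on the applicable actions only, which is the source of the sample-efficiency gain described above.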
Explainability has been widely stated as a cornerstone of the responsible and trustworthy use of machine learning models. With the ubiquitous use of Deep Neural Network (DNN) models expanding to risk-sensitive and safety-critical domains, many methods have been proposed to explain the decisions of these models. Recent years have also seen concerted efforts that have shown how such explanations can be distorted (attacked) by minor input perturbations. While there have been many surveys that review explainability methods themselves, there has been no effort hitherto to assimilate the different methods and metrics proposed to study the robustness of explanations of DNN models. In this work, we present a comprehensive survey of methods that study, understand, attack, and defend explanations of DNN models. We also present a detailed review of different metrics used to evaluate explanation methods, as well as describe attributional attack and defense methods. We conclude with lessons and take-aways for the community towards ensuring robust explanations of DNN model predictions.
Pose graph optimization is a special case of the simultaneous localization and mapping problem, in which the only variables to be estimated are the pose variables and the only measurements are inter-pose constraints. The vast majority of PGO techniques are vertex-based (the variables are the robot poses), but recent work has parameterized the pose graph optimization problem in a relative fashion (the variables are the transformations between poses), making use of a minimum cycle basis to maximize the sparsity of the problem. We explore the construction of a cycle basis in an incremental manner while maximizing sparsity. We validate an algorithm that incrementally constructs a sparse cycle basis and compare its performance to a minimum cycle basis. Additionally, we present an algorithm to approximate the minimum cycle basis of a combination of two graphs, as is common in multi-agent scenarios. Finally, the relative parameterization of pose graph optimization has been limited to using rigid-body transformations on SE(2) or SE(3) as the constraints between poses. We introduce a method to allow the use of lower-degree-of-freedom measurements in the relative pose graph optimization problem. We provide extensive validation of our algorithms on standard benchmarks, simulated datasets, and custom hardware.
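The incremental construction hinges on a classical fact: each non-tree edge added to a spanning tree closes exactly one fundamental cycle. A minimal sketch of that incremental bookkeeping (without the sparsity-maximizing refinements described above; names and structure are illustrative):

```python
from collections import deque

def incremental_cycle_basis(edges):
    """Process edges one at a time; an edge joining two already-connected
    vertices closes a cycle: that edge plus the spanning-tree path between
    its endpoints. Returns the list of fundamental cycles found."""
    parent = {}   # union-find for connectivity
    tree = {}     # adjacency list of the growing spanning tree
    basis = []

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    def tree_path(u, v):
        # BFS restricted to spanning-tree edges, from u to v.
        prev = {u: None}
        q = deque([u])
        while q:
            x = q.popleft()
            if x == v:
                break
            for y in tree.get(x, ()):
                if y not in prev:
                    prev[y] = x
                    q.append(y)
        path = [v]
        while prev[path[-1]] is not None:
            path.append(prev[path[-1]])
        return path[::-1]

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                 # tree edge: extends the spanning tree
            parent[ru] = rv
            tree.setdefault(u, []).append(v)
            tree.setdefault(v, []).append(u)
        else:                        # chord: closes a fundamental cycle
            basis.append(tree_path(u, v) + [u])
    return basis
```

For a connected graph with V vertices and E edges this yields E - V + 1 cycles, the dimension of the cycle space.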
We study the Improving Multi-Armed Bandit (IMAB) problem, in which the reward obtained from an arm increases with the number of pulls it receives. This model provides an elegant abstraction for many real-world problems in domains such as education and employment, where decisions about the allocation of opportunities can affect the future capabilities of communities and the disparity between them. A decision-maker in such settings must consider the impact of her decisions on future rewards, in addition to the standard objective of maximizing her cumulative reward at any time. In many of these applications, the decision-maker's time horizon is unknown, which motivates the study of the IMAB problem in the technically more challenging horizon-unaware setting. We study the tension that arises between two seemingly conflicting objectives in the horizon-unaware setting: a) maximizing the cumulative reward at any time based on the current rewards of the arms, and b) ensuring that arms with better long-term rewards get sufficient opportunities even if their initial rewards are low. We show that, surprisingly, the two objectives are aligned in this setting. Our main contribution is an anytime algorithm for the IMAB problem that achieves the best possible cumulative reward while ensuring that the arms reach their true potential given sufficient time. Our algorithm mitigates the initial disparity due to lack of opportunity, and continues pulling an arm until it stops improving. We prove the optimality of our algorithm by showing that a) any algorithm for the IMAB problem, no matter how utilitarian, must suffer $\Omega(T)$ policy regret and an $\Omega(k)$ competitive ratio with respect to the optimal offline policy, and b) the competitive ratio of our algorithm is $O(k)$.
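The qualitative behavior described above, giving an initially weak but improving arm enough pulls until it stops improving, can be illustrated with a toy greedy rule (this is an illustration only, not the paper's algorithm or its guarantees; the reward curves and tolerance are made up for the example):

```python
import numpy as np

def greedy_until_plateau(reward_fns, horizon, tol=1e-6):
    """Toy IMAB-style rule. Each arm's reward is a non-decreasing function
    of its pull count. While any arm is still improving, pull the arm
    improving the fastest; once every arm plateaus, exploit the best one."""
    k = len(reward_fns)
    pulls = np.zeros(k, dtype=int)
    last = np.array([f(0) for f in reward_fns], dtype=float)
    total = 0.0
    for _ in range(horizon):
        # Marginal gain of one more pull on each arm.
        gains = np.array([f(p + 1) - f(p) for f, p in zip(reward_fns, pulls)])
        if gains.max() > tol:
            a = int(np.argmax(gains))   # explore: fastest-improving arm
        else:
            a = int(np.argmax(last))    # exploit: all arms have plateaued
        pulls[a] += 1
        last[a] = reward_fns[a](pulls[a])
        total += last[a]
    return total, pulls
```

In the toy run below, an arm that starts at reward 0 but improves to 1.0 receives all the pulls, overtaking a static arm with a higher initial reward, which mirrors the disparity-mitigation behavior described in the abstract.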
The focus of this paper is a proof-of-concept machine learning (ML) pipeline that extracts heart rate from pressure sensor data acquired on low-power edge devices. The ML pipeline consists of an upsampler neural network, a signal quality classifier, and a 1D convolutional neural network optimized for efficient and accurate heart rate estimation. The models were designed so that the pipeline is smaller than 40 kB. Additionally, a hybrid pipeline was developed, consisting of the upsampler and classifier followed by a peak detection algorithm. The pipelines were deployed on an ESP32 edge device and benchmarked against signal processing to determine energy usage and inference times. The results indicate that the proposed ML and hybrid pipelines reduce energy and time per inference by 82% and 28%, respectively, compared to traditional algorithms. The main trade-off for the ML pipeline was accuracy, with a mean absolute error (MAE) of 3.28, compared to 2.39 and 1.17 for the hybrid and signal processing pipelines. The ML models therefore show promise for deployment in energy- and computation-constrained devices. Furthermore, the lower sampling rate and computational requirements of the ML pipeline could enable custom hardware solutions that reduce the cost and energy needs of wearable devices.
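For orientation, the peak-detection step such pipelines rely on can be approximated very simply: find local maxima above a threshold and convert the inter-peak intervals to beats per minute. The sketch below is an illustrative stand-in (the threshold rule and function are assumptions, not the deployed algorithm):

```python
import numpy as np

def heart_rate_from_signal(signal, fs):
    """Estimate heart rate (BPM) from a pulse-like signal sampled at `fs` Hz.
    A sample counts as a peak if it exceeds a mean+std threshold and both
    neighbours. Returns None if fewer than two beats are found."""
    x = np.asarray(signal, dtype=float)
    thresh = x.mean() + 0.5 * x.std()
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > thresh and x[i] > x[i - 1] and x[i] >= x[i + 1]]
    if len(peaks) < 2:
        return None
    intervals = np.diff(peaks) / fs    # seconds between consecutive beats
    return 60.0 / intervals.mean()     # beats per minute
```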
Neural Collapse refers to the remarkable structural properties characterizing the geometry of class embeddings and classifier weights, found in deep nets trained beyond zero training error. However, this characterization only holds for balanced data. Here we ask whether it can be made invariant to class imbalance. Towards this end, we adopt the unconstrained features model (UFM), a recent theoretical model for studying neural collapse, and introduce Simplex-Encoded-Labels Interpolation (SELI) as an invariant characterization of the neural collapse phenomenon. Specifically, we prove that for the UFM with cross-entropy loss and vanishing regularization, irrespective of class imbalance, the embeddings and classifiers always interpolate a simplex-encoded label matrix, and that their individual geometries are determined by the SVD factors of this same label matrix. We then conduct extensive experiments on synthetic and real datasets that confirm convergence to the SELI geometry. However, we caution that convergence worsens with increasing imbalance. We support this theoretically by showing that, unlike the balanced case, when minority classes are present, ridge regularization plays a critical role in tweaking the geometry. This defines new questions and motivates further investigation into the impact of class imbalance on the rates at which first-order methods converge to their asymptotically preferred solutions.
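The interpolation property can be illustrated numerically: build a simplex-encoded label matrix (here a centered one-hot matrix, an assumed stand-in for the paper's exact SELI construction) and read off classifier/embedding factors from its SVD:

```python
import numpy as np

def seli_factors(labels, num_classes):
    """Sketch: form a simplex-encoded label matrix Z (centered one-hot) and
    split its SVD into a classifier factor W and an embedding factor H so
    that W @ H reconstructs Z exactly, regardless of class imbalance."""
    n = len(labels)
    Y = np.zeros((num_classes, n))
    Y[labels, np.arange(n)] = 1.0
    Z = Y - Y.mean(axis=0, keepdims=True)    # columns sum to zero (simplex)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    r = int((s > 1e-10).sum())               # numerical rank <= k - 1
    W = U[:, :r] * np.sqrt(s[:r])            # classifier factor
    H = np.sqrt(s[:r])[:, None] * Vt[:r]     # embedding factor
    return Z, W, H
```

Even with an imbalanced label vector, the factors interpolate the encoded label matrix, which is the invariance the SELI characterization captures.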